Patients care about what their teeth will look like after orthodontic treatment. Orthodontists usually describe the expected tooth movement based on the original smile images, which is unconvincing. The rise of deep-learning generative models is changing this situation: such models can visualize the outcome of orthodontic treatment and help patients foresee their future teeth and facial appearance. While previous studies mainly focus on 2D or 3D virtual treatment outcome (VTO) at the profile level, the problem of simulating the treatment outcome in a frontal facial image is poorly explored. In this paper, we build an efficient and accurate system for simulating virtual teeth-alignment effects in a frontal facial image. Our system takes as input a frontal face image of a patient with visibly malpositioned teeth and the patient's 3D scanned teeth model, and progressively generates visual results of the patient's teeth given the specific orthodontic planning steps from the doctor (i.e., the specified translations and rotations of individual teeth). We design a multi-modal encoder-decoder generative model to synthesize identity-preserving frontal facial images with aligned teeth. In addition, the color information of the original image is used to optimize the orthodontic outcomes, making the results more natural. We conduct extensive qualitative and clinical experiments, as well as a pilot study, to validate our method.
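As a rough illustration of the multi-modal generation idea (not the authors' architecture), the following PyTorch sketch encodes the face photo and a rendering of the planned teeth in separate branches and decodes a post-treatment image; all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class VTOGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.face_enc = nn.Sequential(conv_block(3, 32), conv_block(32, 64))   # photo branch
        self.teeth_enc = nn.Sequential(conv_block(3, 32), conv_block(32, 64))  # planned-teeth branch
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, face, teeth_render):
        # Fuse the two modalities by channel concatenation, then decode.
        z = torch.cat([self.face_enc(face), self.teeth_enc(teeth_render)], dim=1)
        return self.decoder(z)

face = torch.randn(1, 3, 256, 256)    # frontal photo
teeth = torch.randn(1, 3, 256, 256)   # rendering of the planned teeth pose
out = VTOGenerator()(face, teeth)     # predicted post-treatment image, (1, 3, 256, 256)
```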
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Transformer-based models have gained wide popularity and demonstrated promising results in long-term time-series forecasting in recent years. In addition to learning attention in the time domain, recent works also explore learning attention in frequency domains (e.g., the Fourier and wavelet domains), given that seasonal patterns can be better captured there. In this work, we seek to understand the relationships between attention models in different time and frequency domains. Theoretically, we show that attention models in different domains are equivalent under linear conditions (i.e., with a linear kernel for the attention scores). Empirically, we analyze how attention models in different domains behave differently through various synthetic experiments with seasonality, trend, and noise, with emphasis on the role of the softmax operation. Both the theoretical and empirical analyses motivate us to propose a new method, TDformer (Trend Decomposition Transformer), which first applies seasonal-trend decomposition and then additively combines an MLP that predicts the trend component with Fourier attention that predicts the seasonal component to obtain the final prediction. Extensive experiments on benchmark time-series forecasting datasets demonstrate that TDformer achieves state-of-the-art performance against existing attention-based models.
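A minimal PyTorch sketch of the flow the abstract describes, under stated assumptions: a moving-average seasonal-trend decomposition, an MLP for the trend, and a frequency-domain module for the seasonal part; the learned per-frequency filter here is a simple stand-in for the paper's Fourier attention.

```python
import torch
import torch.nn as nn

class TDSketch(nn.Module):
    def __init__(self, seq_len, pred_len, kernel=25):
        super().__init__()
        # Moving average extracts the trend; the residual is the seasonal part.
        self.avg = nn.AvgPool1d(kernel, stride=1, padding=kernel // 2, count_include_pad=False)
        self.trend_mlp = nn.Sequential(nn.Linear(seq_len, 128), nn.ReLU(), nn.Linear(128, pred_len))
        self.filt = nn.Parameter(torch.ones(seq_len // 2 + 1))  # per-frequency gain (toy stand-in)
        self.season_head = nn.Linear(seq_len, pred_len)

    def forward(self, x):                               # x: (batch, seq_len)
        trend = self.avg(x.unsqueeze(1)).squeeze(1)     # moving-average trend
        seasonal = x - trend                            # residual seasonality
        spec = torch.fft.rfft(seasonal, dim=-1) * self.filt   # process in Fourier domain
        seasonal = torch.fft.irfft(spec, n=x.size(-1), dim=-1)
        return self.trend_mlp(trend) + self.season_head(seasonal)  # additive recombination

y = TDSketch(seq_len=96, pred_len=24)(torch.randn(8, 96))  # (8, 24)
```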
Conceptual knowledge is fundamental to human cognition and knowledge bases. However, existing knowledge probing works focus only on evaluating the factual knowledge of pre-trained language models (PLMs) and ignore conceptual knowledge. Since conceptual knowledge often appears as implicit commonsense behind text, designing probes for it is hard. Inspired by knowledge representation schemata, we comprehensively evaluate the conceptual knowledge of PLMs by designing three tasks that probe whether PLMs organize entities by conceptual similarities, learn conceptual properties, and conceptualize entities in contexts, respectively. For these tasks, we collect and annotate 24k data instances covering 393 concepts, forming COPEN, a COnceptual knowledge Probing bENchmark. Extensive experiments on PLMs of different sizes and types show that existing PLMs systematically lack conceptual knowledge and suffer from various spurious correlations. We believe this is a critical bottleneck for realizing human-like cognition in PLMs. COPEN and our code are publicly released at https://github.com/THU-KEG/COPEN.
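One simple way to probe a PLM's conceptual knowledge, in the spirit of (but not identical to) COPEN's tasks, is to query a masked LM for a hypernym slot and inspect the ranked fillers; the prompts below are illustrative.

```python
from transformers import pipeline

# Fill-mask probing: does the model place the right concept among its top guesses?
fill = pipeline("fill-mask", model="bert-base-uncased")
for prompt in ("A violin is a kind of [MASK].",
               "In the sentence 'He ate an apple', the apple is a [MASK]."):
    top = fill(prompt, top_k=3)
    print(prompt, "->", [(c["token_str"], round(c["score"], 3)) for c in top])
```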
Text-based visual question answering (TextVQA) aims to answer questions correctly about images that contain multiple pieces of scene text. In most cases, text is naturally attached to object surfaces, so spatial reasoning between text and objects is crucial in TextVQA. However, existing methods are limited to the 2D spatial information learned from the input image and rely on transformer-based architectures to reason implicitly during fusion. Under this setting, such 2D spatial reasoning cannot distinguish fine-grained spatial relations between visual objects and scene text on the same image plane, which harms the interpretability and performance of TextVQA models. In this paper, we introduce 3D geometric information into a human-like spatial reasoning process to capture the contextual knowledge of key objects step by step. To enhance the model's understanding of 3D spatial relations, (i) we propose a relation prediction module to accurately localize the regions of interest of key objects, and (ii) we design a depth-aware attention calibration module to calibrate the attention of OCR tokens according to the key objects. Extensive experiments show that our method achieves state-of-the-art performance on the TextVQA and ST-VQA datasets. More encouragingly, our model outperforms others by clear margins of 5.7% and 12.1% on questions involving spatial reasoning in the TextVQA and ST-VQA valid splits. We also verify the generality of our model on the text-based image captioning task.
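A hedged sketch of the depth-aware calibration idea: bias attention logits against OCR tokens whose estimated depth differs from that of the key object. This illustrates the principle only, not the authors' module.

```python
import torch

def depth_calibrated_attention(q, k, v, token_depth, key_depth, tau=1.0):
    """q: (1, d); k, v: (n, d); token_depth: (n,); key_depth: scalar tensor."""
    logits = (q @ k.T) / k.size(-1) ** 0.5            # (1, n) similarity scores
    bias = -torch.abs(token_depth - key_depth) / tau  # penalize depth mismatch with the key object
    attn = torch.softmax(logits + bias, dim=-1)
    return attn @ v                                   # depth-aware context vector

q = torch.randn(1, 64)
k, v = torch.randn(5, 64), torch.randn(5, 64)
out = depth_calibrated_attention(q, k, v, torch.rand(5), torch.tensor(0.4))
```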
We address the new problem of language-guided semantic style transfer of 3D indoor scenes. The input is a 3D indoor scene mesh and several phrases that describe the target scene. First, 3D vertex coordinates are mapped to RGB residuals by a multi-layer perceptron. Second, the colored 3D mesh is differentiably rendered into 2D images through a viewpoint sampling strategy tailored to indoor scenes. Third, the rendered 2D images are compared with the phrases via a pre-trained vision-language model. Finally, the errors are back-propagated to the multi-layer perceptron to update the vertex colors corresponding to certain semantic categories. We conduct large-scale qualitative analyses and A/B user tests on the public ScanNet and SceneNN datasets. We demonstrate: (1) visually pleasing results that may be useful for multimedia applications; (2) the importance of rendering 3D indoor scenes from viewpoints consistent with human priors; (3) that incorporating semantics significantly improves style transfer quality; and (4) that an HSV regularization term yields results that are more consistent with the input and are generally rated better. Code and the user study toolbox are available at https://github.com/air-discover/lasst
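A compressed sketch of this optimization loop, with toy differentiable stand-ins for the renderer and the vision-language scorer (in the paper these are a differentiable rasterizer and a pre-trained model such as CLIP):

```python
import torch
import torch.nn as nn

mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))  # xyz -> RGB residual
verts = torch.rand(1000, 3)           # mesh vertex coordinates
base_rgb = torch.rand(1000, 3)        # original vertex colors
target_emb = torch.randn(3)           # toy stand-in for the text phrase embedding
mask = torch.rand(1000) > 0.5         # vertices of the semantic class being styled

opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for step in range(100):
    rgb = (base_rgb + mlp(verts)).clamp(0, 1)                  # step 1: residual colors
    view = rgb[mask].mean(dim=0)                               # step 2: toy "rendering" statistic
    loss = -torch.cosine_similarity(view, target_emb, dim=0)   # step 3: toy text-image score
    opt.zero_grad(); loss.backward(); opt.step()               # step 4: update only the MLP
```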
With the rapid development of mobile photography technology, leading smartphone manufacturers are racing to improve the shooting capability of their devices and the photo beautification algorithms in their software. However, improvements in smart devices and algorithms cannot replace human subjective photography skills. In this paper, we propose aesthetic language guidance of images (ALG). We divide ALG into ALG-T and ALG-I according to whether the guidance rule is based on a photography template or on a guidance image. In both ALG-T and ALG-I, guidance for photography is given in terms of three attributes: color, lighting, and composition. The differences in these three attributes between the input image and the photography template or guidance image are described in natural language, i.e., aesthetic natural language guidance (ALG). In addition, because of the differences in lighting and composition between landscape and portrait images, we divide input images into these two types, and ALG-T and ALG-I provide aesthetic guidance for landscape and portrait input images, respectively.
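A toy illustration of the ALG-I idea: compare simple color and lighting statistics of the input and a guidance image and phrase the differences as advice. Thresholds and wording here are illustrative assumptions, not the paper's rules.

```python
import numpy as np

def aesthetic_hints(input_img, guide_img):
    """Both arguments are HxWx3 float arrays in [0, 1]."""
    hints = []
    d_bright = guide_img.mean() - input_img.mean()          # lighting: mean brightness gap
    if abs(d_bright) > 0.1:
        hints.append("increase exposure" if d_bright > 0 else "reduce exposure")
    sat = lambda im: (im.max(-1) - im.min(-1)).mean()       # color: crude per-pixel saturation
    d_sat = sat(guide_img) - sat(input_img)
    if abs(d_sat) > 0.1:
        hints.append("use richer colors" if d_sat > 0 else "mute the colors")
    return hints or ["lighting and color already match the guidance image"]

print(aesthetic_hints(np.random.rand(64, 64, 3), np.random.rand(64, 64, 3) * 0.5))
```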
In this work, we introduce the Gradient Siamese Network (GSN) for image quality assessment. The proposed method adeptly captures the gradient features between distorted and reference images in the full-reference image quality assessment (IQA) task. We employ central difference convolution to obtain the semantic features and detail differences hidden in the image pairs, and spatial attention guides the network to focus on regions related to image details. For the low-level, mid-level, and high-level features extracted by the network, we design a novel multi-level fusion method to improve the efficiency of feature utilization. In addition to the common mean squared error supervision, we further consider the relative distance among batch samples and successfully apply a KL divergence loss to the image quality assessment task. We evaluate the proposed GSN on several publicly available datasets and demonstrate its superior performance. Our network won second place in track 1 of the NTIRE 2022 Perceptual Image Quality Assessment Challenge.
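Central difference convolution is a known primitive; a minimal implementation following the common formulation mixes a vanilla convolution with a center-subtracting term weighted by theta:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralDiffConv2d(nn.Module):
    def __init__(self, c_in, c_out, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1, bias=False)
        self.theta = theta

    def forward(self, x):
        vanilla = self.conv(x)
        # 1x1 kernel holding the sum of each 3x3 kernel's weights:
        # subtracting its response emphasizes local gradient information.
        w_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        center = F.conv2d(x, w_sum)
        return vanilla - self.theta * center

y = CentralDiffConv2d(3, 16)(torch.randn(1, 3, 64, 64))  # (1, 16, 64, 64)
```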
Estimating the 3D poses of interacting hands from a single RGB image is essential for understanding human behavior. Unlike most previous works, which directly predict the 3D poses of two interacting hands, we propose to decompose the challenging interacting-hand pose estimation task and estimate the pose of each hand separately. In this way, recent progress in single-hand pose estimation can be exploited directly. However, hand pose estimation in interacting scenarios is very challenging due to (1) severe hand-hand occlusion and (2) ambiguity between the hands. To tackle these two challenges, we propose a novel Hand De-occlusion and Removal (HDR) framework that performs hand de-occlusion and distractor removal. We also propose the first large-scale synthetic amodal hand dataset, termed the Amodal InterHand Dataset (AIH), to facilitate model training and promote related research. Experiments show that the proposed method significantly outperforms previous state-of-the-art interacting-hand pose estimation methods. Code and data are available at https://github.com/menghao666/hdr.
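A schematic sketch of the decomposition strategy (all three callables are hypothetical placeholders, not the HDR modules): remove the distracting hand, de-occlude the remaining one, then reuse a single-hand estimator.

```python
import numpy as np

def interacting_hand_poses(img, deocclude, remove_distractor, single_hand_pose):
    poses = {}
    for hand in ("left", "right"):
        clean = remove_distractor(img, keep=hand)    # erase the other hand
        clean = deocclude(clean, hand=hand)          # recover occluded pixels
        poses[hand] = single_hand_pose(clean, hand)  # reuse a single-hand estimator
    return poses

# Toy placeholders so the sketch runs end to end.
identity = lambda img, **kw: img
dummy_pose = lambda img, hand: np.zeros((21, 3))     # 21 joints, xyz
poses = interacting_hand_poses(np.zeros((256, 256, 3)), identity, identity, dummy_pose)
```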
Deploying deep learning models on various devices has become an important topic. The wave of hardware specialization brings a diverse set of acceleration primitives for multi-dimensional tensor computation. These new acceleration primitives, together with emerging machine learning models, pose significant engineering challenges. In this paper, we present TensorIR, a compiler abstraction for optimizing programs with these tensor computation primitives. TensorIR generalizes the loop-nest representation used in existing machine learning compilers to make tensor computation a first-class citizen. Finally, we build an end-to-end framework on top of our abstraction to automatically optimize deep learning models for given tensor computation primitives. Experimental results show that TensorIR compilation automatically uses the tensor computation primitives for given hardware backends and delivers performance that is competitive with state-of-the-art hand-optimized systems across platforms.
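As a flavor of what "tensor computation as a first-class citizen" looks like, here is a small matmul written in the style of TVM's TensorIR script (syntax follows recent TVM releases and may differ across versions):

```python
from tvm.script import tir as T

@T.prim_func
def matmul(A: T.Buffer((128, 128), "float32"),
           B: T.Buffer((128, 128), "float32"),
           C: T.Buffer((128, 128), "float32")):
    for i, j, k in T.grid(128, 128, 128):
        with T.block("C"):                               # a block: the schedulable unit
            vi, vj, vk = T.axis.remap("SSR", [i, j, k])  # spatial (S) and reduction (R) axes
            with T.init():
                C[vi, vj] = T.float32(0)
            C[vi, vj] = C[vi, vj] + A[vi, vk] * B[vk, vj]
```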